
    The economic costs of road traffic congestion

    The main cause of road traffic congestion is that the volume of traffic is too close to the maximum capacity of a road or network. Congestion in the UK is worse than in many, perhaps most, other European countries. More important, it is getting worse, year by year. Current official forecasts imply that congestion will be substantially worse by the end of this decade, even on the very favourable assumption that all current Government projects and policies are implemented in full, successfully, and to time. This is because road traffic is growing faster than road capacity. This is not a temporary problem: it will continue to be the case, in the absence of measures to reduce traffic, because it is infeasible to match a road programme to unrestricted trends in traffic growth. The effect, using the current Government method of measuring congestion, and a long-established method of valuing it, would be that the widely quoted figure of an annual cost of £20 billion would increase to £30 billion by 2010.

    Under current social and economic frameworks, there are no feasible policies that could reduce congestion to zero in practice, or that would be worthwhile doing in theory. But savings worth £4b-£6b a year could in principle be made by congestion charging alone, over the whole network, of which (very approximately) half might be reflected in the prices of goods, and half in savings in individuals' own time spent travelling. A good proportion of this could alternatively be secured by an appropriate package of alternative measures: priority lanes and signalling; switching to other modes, including freight to rail and passenger movements to public transport, walking and cycling; 'soft' policies to encourage reduced travel by car; land-use patterns which reduce unnecessary travel; and associated measures to prevent benefits from being eroded by induced travel. The combined effects of road charging and a supportive set of complementary measures represent the best that could reasonably be achieved in the short to medium run. This could reduce congestion costs (as distinct from slowing down their increase) by 40%-50%.

    These broad-brush figures, though based on long-established methods, must be treated with great caution. The 'cost of congestion', as used for these calculations, is based on relationships which in reality are not exact, stable or even meaningful. The wrong indicator has been used, comparing average real speeds with average ideal speeds. But in the real world, speeds are different every day, and so is the level of congestion. For just-in-time operation, and for much personal and business travel, variability and reliability are much more important. The really costly effect of congestion is not the slightly increased average time, but the greater than average effect in particular locations and markets, and the greatly increased unreliability.

    During the near future, until road pricing is implemented, increases in road congestion can lead to some shift in the balance of attractiveness of rail freight, sufficient for a proportion of the freight market to transfer from road. This would in turn make a small but significant contribution to reducing congestion, especially in some specific important corridors. Even though rail freight is usually a small proportion of all freight, the annual economic saving in congestion cost, to road users generally, from transferring a 5-times-a-week, 200-mile round trip, mostly on congested motorways, from road to rail would be in the order of £40,000 to £80,000, to which should be added the commercial cost savings made by the freight operator who chooses to do so. It should be emphasised that sustaining this would require measures to prevent induced car traffic filling up the relieved road space.

    An example of the impact of factoring in unreliability is given by approximate calculations made for journeys such as Glasgow to Newcastle, Cardiff to Dover, or London to Manchester. In free-flow theory these could be 3-hour journeys, but moderate congestion requires adding an hour to the average time and another hour's safety margin to ensure that a tight delivery slot is not missed too often. In congestion so severe as to double the average time, the extra safety margin for unreliability could be as much as 4 hours, which is simply not feasible in many cases.

    The 'total cost of congestion' is a large number, but it is practically meaningless and, by 'devaluing the currency', it distracts attention from more important, achievable, objectives. It would be better not to use it as a target for policy. The two key things to do are:

    · Strategic action to reduce traffic volume to a level where conditions do not vary too much from day to day. In some circumstances this will slightly increase average speed, though not always: in some road conditions a reduction of average speed can greatly improve the smoothness of traffic flow. But in both cases, it will greatly increase reliability, this being more important than the change in average speed;

    · Practical measures to provide good alternatives for freight and passenger movements which reduce the intensity of use of scarce road space in congested conditions. Even where this only applies to a minority of movements, significant effects are possible.

    The Government plans to 're-launch' the Ten Year Plan for Transport this Summer or Autumn. It is not reasonable to expect that the re-launch will include congestion charging for cars within the decade, so it will need to plan for it as soon as possible after, and for a short-term coping strategy of priority measures to protect the most important classes of movement (both passenger and freight) from congestion in the period before charging is implemented.
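
    The unreliability arithmetic in the journey example above is simple enough to make explicit. The following Python fragment is a minimal sketch of that calculation only; the function name is ours, and the hour values are the illustrative figures quoted in the abstract, not outputs of the underlying report.

        # Sketch of the abstract's journey-time arithmetic (illustrative values only).
        def planned_journey_time(free_flow_hours, extra_average_hours, safety_margin_hours):
            """Hours to allow so that a tight delivery slot is rarely missed."""
            average_time = free_flow_hours + extra_average_hours
            return average_time + safety_margin_hours

        # Moderate congestion: one extra hour on the average, one hour's safety margin.
        moderate = planned_journey_time(3.0, 1.0, 1.0)   # 5.0 hours
        # Severe congestion: the average time doubles and the margin grows to ~4 hours.
        severe = planned_journey_time(3.0, 3.0, 4.0)     # 10.0 hours

        print(f"Moderate congestion: allow {moderate:.0f} h for a 3 h free-flow trip")
        print(f"Severe congestion: allow {severe:.0f} h for a 3 h free-flow trip")

    The point of the calculation is that unreliability, not the change in average speed, dominates: the severe case more than triples the time that must be budgeted for a nominally 3-hour journey.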

    Valuing the small: counting the benefits

    This paper contains part of a report commissioned by a consortium of organisations concerned with the successful development of sustainable transport strategies, and drafted by Professor Phil Goodwin of UCL. It follows a report, Less Traffic Where People Live, which, using case studies of experience here and elsewhere in Europe, has demonstrated that small-scale, or 'soft factors' can be effective in tackling transport problems, especially when used in combination. Such examples include bus priority schemes, measures for improving walking and cycling, traffic calming, car clubs, school and workplace travel plans, and the use of personalised advice and information to assist people in reducing the congestion and pollution they cause. The DfT's report Smarter Choices: Changing the Way We Travel has also highlighted the significant potential which exists to reduce traffic and congestion, providing soft factors are accompanied by supporting measures to manage demand. The DfT has established a unit dedicated to developing experience on soft factors, including on appraisal. Coupled with the recent report, momentum is building behind expanding the role of soft factors in transport policy. These are all initiatives which are supported by national and local government, and on which the sponsoring organisations have in recent years become active advisers as well as campaigners.

    Taken together, such relatively cheap and potentially popular initiatives are not only powerful contributions to the Government's transport strategy: they are also the leading examples of initiatives which can produce improvements swiftly, an important consideration both for political reasons and in order to produce the momentum and consensus for longer-term initiatives.

    This attractive combination of relative cheapness, environmental advantage, demonstrated successes in good practice, and speed of delivery would, one might think, lead to such policies being very high profile indeed. However, this is not always the case. The problem this report addresses is reflected in recurrent concerns that the merits of such initiatives are overshadowed by the bigger, longer-term, much more ambitious, and often much more controversial, 'big' policies: especially massive rail or road infrastructure projects.

    In some ways it is natural that the 'big' initiatives should receive more attention than the 'small', especially in view of a long period of inadequate or distorted investment. But taken too far, this can be counter-productive. The question this report addresses is whether there is some systematic reason, deep in the appraisal and forecasting methods, which prevents perfectly good initiatives receiving the attention and funding they deserve. The suggestion is that there are indeed some important biases of this kind, and that sorting them out will have very helpful effects in avoiding wasted opportunities and accelerating delivery.

    This report addresses the following questions and is intended to be a helpful contribution to this area of work:

    > what are the barriers that prevent the small, good value-for-money schemes being taken up with greater enthusiasm than the big, poor value-for-money projects?
    > are there ways of restoring a balanced implementation process?

    It is obvious that such barriers will include political and ideological considerations, and the role of vested interests, but they are not the focus of this report. Rather, the concern is that there may be weaknesses in the process of appraisal and assessment, preceding any implementation, which produce a bias against the small schemes. This process is intended to resolve practical questions of design, economic questions of value for money, planning questions of consistency, and the relationship between short- and long-term objectives: it depends on a set of formal procedures and practices (surveys, models, forecasts, appraisal frameworks) built up over many years, and originating in the economic cost-benefit analyses whose principles and basic features were established in the 1960s and 1970s. The suggestion is made that there are some in-built biases in current appraisal techniques, developed as they were in a different time and for a different agenda, which discriminate against some of the best measures and in favour of some of the least effective.

    The ellipticities of Galactic and LMC globular clusters

    The globular clusters of the LMC are found to be significantly more elliptical than Galactic globular clusters, but very similar in virtually all other respects. The ellipticity of the LMC globular clusters is shown not to be correlated with the age or mass of those clusters. It is proposed that the ellipticity differences are caused by the different strengths of the tidal fields in the LMC and the Galaxy. The strong Galactic tidal field erases initial velocity anisotropies and removes angular momentum from globular clusters, making them more spherical. The tidal field of the LMC is not strong enough to perform these tasks, and its globular clusters remain close to their initial states.

    Comment: 3 pages, LaTeX file with 3 figures incorporated; accepted for publication in MNRAS. Also available by e-mailing spg, or by ftp from ftp://star-www.maps.susx.ac.uk/pub/papers/spg/ellip.ps.

    Structuring the decision process: an evaluation of methods

    This chapter examines the effectiveness of methods that are designed to provide structure and support to decision making. Those that are primarily aimed at individual decision makers are examined first, and then attention is turned to groups. In each case, weaknesses of unaided decision making are identified, and the likely success of formal methods in mitigating these weaknesses is assessed.

    Usuda Deep Space Center support for ICE

    The planning, implementation and operations that took place to enable the Usuda, Japan, Deep Space Center to support the International Cometary Explorer (ICE) mission are summarized. The results show that even at very short notice our two countries can provide mutual support to help ensure mission success. The data recovery at the Usuda Deep Space Center contributed significantly to providing the required continuity of the experimental data stream at the encounter of Comet Giacobini-Zinner.

    Where's the Working Class?

    From the Communist Manifesto onwards, the self-emancipation of the working class was central to Marx's thought. And so it was for subsequent generations of Marxists, including the later Engels, the pre-WW1 Kautsky, Lenin, Luxemburg, Trotsky and Gramsci. But in much contemporary Marxist theory the active role of the working class seems at the least marginal and at the most completely written off. This article traces the perceived role of the working class in Marxist theory, from Marx and Engels, through the Second and Third Internationals, Stalinism and Maoism, through to the present day. It situates this in political developments and changes in the nature of the working class over the last 200 years. It concludes by suggesting a number of questions about Marxism and the contemporary working class that anyone claiming to be a Marxist today needs to answer.

    Evidence for the Strong Effect of Gas Removal on the Internal Dynamics of Young Stellar Clusters

    We present detailed luminosity profiles of the young massive clusters M82-F, NGC 1569-A, and NGC 1705-1 which show significant departures from equilibrium (King and EFF) profiles. We compare these profiles with those from N-body simulations of clusters which have undergone the rapid removal of a significant fraction of their mass due to gas expulsion. We show that the observations and simulations agree very well with each other, suggesting that these young clusters are undergoing violent relaxation and are also losing a significant fraction of their stellar mass. That these clusters are not in equilibrium can explain the discrepant mass-to-light ratios observed in many young clusters with respect to simple stellar population models, without resorting to non-standard initial stellar mass functions as claimed for M82-F and NGC 1705-1. We also discuss the effect of rapid gas removal on the complete disruption of a large fraction of young massive clusters ('infant mortality'). Finally we note that even bound clusters may lose >50% of their initial stellar mass due to rapid gas loss ('infant weight-loss').

    Comment: 6 pages, 3 figures; MNRAS Letters, accepted.
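
    The link between non-equilibrium dynamics and mass-to-light ratios rests on the standard virial mass estimator; the abstract does not spell it out, so the following is our gloss:

        $$ M_{\rm dyn} \approx \eta\,\frac{\sigma^{2}\,r_{\rm hl}}{G}, $$

    where $\sigma$ is the measured velocity dispersion, $r_{\rm hl}$ the half-light radius, and $\eta$ a structure-dependent constant (roughly 10 for commonly assumed profiles). A cluster that is still expanding after gas expulsion has a $\sigma$ inflated above its virial value, so $M_{\rm dyn}$, and with it $M/L$, is overestimated; this is why the discrepant ratios need not imply a non-standard stellar initial mass function.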

    Simulating star formation in molecular cloud cores I. The influence of low levels of turbulence on fragmentation and multiplicity

    We present the results of an ensemble of simulations of the collapse and fragmentation of dense star-forming cores. We show that even with very low levels of turbulence the outcome is usually a binary, or higher-order multiple, system. We take as the initial conditions for these simulations a typical low-mass core, based on the average properties of a large sample of observed cores. All the simulated cores start with a mass of $M = 5.4\,M_{\odot}$, a flattened central density profile, a ratio of thermal to gravitational energy $\alpha_{\rm therm} = 0.45$, and a ratio of turbulent to gravitational energy $\alpha_{\rm turb} = 0.05$. Even this low level of turbulence is sufficient to produce multiple star formation in 80% of the cores; the mean number of stars and brown dwarfs formed from a single core is 4.55, and the maximum is 10. At the outset, the cores have no large-scale rotation. The only difference between each individual simulation is the detailed structure of the turbulent velocity field. The multiple systems formed in the simulations have properties consistent with observed multiple systems. Dynamical evolution tends preferentially to eject lower-mass stars and brown dwarfs whilst hardening the remaining binaries, so that the median semi-major axis of the binaries formed is $\sim 30$ au. Ejected objects are usually single low-mass stars and brown dwarfs, yielding a strong correlation between mass and multiplicity. Our simulations suggest a natural mechanism for forming binary stars that does not require large-scale rotation, capture, or large amounts of turbulence.

    Comment: 20 pages, 12 figures; submitted to A&A.
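
    For readers unfamiliar with the notation, the energy ratios quoted above follow the standard convention in the star-formation literature (our gloss, not stated in the abstract):

        $$ \alpha_{\rm therm} = \frac{E_{\rm therm}}{|E_{\rm grav}|} = 0.45, \qquad \alpha_{\rm turb} = \frac{E_{\rm turb}}{|E_{\rm grav}|} = 0.05, $$

    so thermal support exceeds turbulent support by roughly a factor of nine, and the combined support of 0.5 places the cores close to the virial value, i.e. on the margin of collapse.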

    Star Cluster Survival in Star Cluster Complexes under Extreme Residual Gas Expulsion

    After the stars of a new, embedded star cluster have formed, they blow the remaining gas out of the cluster. In particular, winds of massive stars and the onset of the first supernovae can remove the residual gas from a cluster. This leads to a very violent mass loss and leaves the cluster out of dynamical equilibrium. Standard models predict that within the cluster volume the star formation efficiency (SFE) has to be about 33 per cent for sudden (within one crossing-time of the cluster) gas expulsion to retain some of the stars in a bound cluster. If the efficiency is lower, most of the stars of the cluster disperse. Recent observations reveal that in strong star bursts star clusters do not form in isolation, but in complexes containing dozens and up to several hundred star clusters, i.e. in super-clusters. By carrying out numerical experiments for such objects placed at distances >= 10 kpc from the centre of the galaxy, we demonstrate that under these conditions (i.e. the deeper potential of the star cluster complex and the merging process of the star clusters within these super-clusters) the SFE can be as low as 20 per cent and still leave a gravitationally bound stellar population. Such an object resembles the outer Milky Way globular clusters and the faint fuzzy star clusters recently discovered in NGC 1023.

    Comment: 21 pages, 8 figures; accepted by ApJ.
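
    As a gloss on the 33 per cent threshold (a standard textbook argument, not spelled out in the abstract): define the SFE as the stellar fraction of the embedded cluster's mass,

        $$ \varepsilon = \frac{M_{\rm stars}}{M_{\rm stars} + M_{\rm gas}}. $$

    If the stars are initially in virial equilibrium in the combined star-plus-gas potential, instantaneous gas expulsion leaves their velocities unchanged while the binding mass drops by the factor $\varepsilon$; the simplest point-mass estimate then predicts dispersal for $\varepsilon < 0.5$, and N-body models with realistic profiles lower the survival threshold to roughly $\varepsilon \approx 1/3$, the 33 per cent quoted above. The paper's point is that the deeper potential of a cluster complex relaxes this threshold further, to about 20 per cent.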